Object-goal navigation (Object-nav) entails searching for, recognizing, and navigating to a target object. Object-nav has been extensively studied by the Embodied-AI community, but most solutions are restricted to static objects (e.g., television, fridge). We propose a modular framework for Object-nav that can efficiently search indoor environments not just for static objects but also for movable objects (e.g., fruits, glasses, phones) that frequently change position due to human intervention. Our contextual-bandit agent explores the environment efficiently by exhibiting optimism in the face of uncertainty and learns a model of the likelihood of spotting different objects from each navigable location. These likelihoods are used as rewards in a weighted minimum-latency solver to deduce a trajectory for the robot. We evaluate our algorithms in two simulated environments and a real-world setting, demonstrating high sample efficiency and reliability.
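To make the bandit idea concrete, here is a minimal UCB-style sketch of "optimism in the face of uncertainty" over navigable locations. All names (`OptimisticSpotter`, `visits`, `spots`) are illustrative rather than the paper's code, and the weighted minimum-latency solver that consumes these likelihood estimates is omitted.

```python
import numpy as np

class OptimisticSpotter:
    """Sketch: UCB-style bandit over navigable locations for one target object."""

    def __init__(self, n_locations, c=1.0):
        self.c = c                            # exploration-bonus weight (assumed)
        self.visits = np.zeros(n_locations)   # times each location was visited
        self.spots = np.zeros(n_locations)    # times the target was spotted there
        self.t = 0

    def choose(self):
        self.t += 1
        mean = self.spots / np.maximum(self.visits, 1)
        bonus = self.c * np.sqrt(np.log(self.t) / np.maximum(self.visits, 1))
        # Optimism: unvisited locations get infinite value and are tried first.
        ucb = np.where(self.visits == 0, np.inf, mean + bonus)
        return int(np.argmax(ucb))

    def update(self, loc, spotted):
        self.visits[loc] += 1
        self.spots[loc] += float(spotted)
```

The per-location spotting means estimated here would then serve as the rewards handed to the trajectory solver.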
We address the problem of few-shot classification, where the goal is to learn a classifier from a limited set of samples. While data-driven learning has proven effective in various applications, learning from limited data remains challenging. To address this challenge, existing approaches consider various data augmentation techniques to increase the number of training samples. Pseudo-labeling is commonly used in the few-shot setup, where approximate labels are estimated for a large set of unlabeled images. We propose DiffAlign, which focuses on generating images from class labels. Specifically, we leverage the recent success of generative models (e.g., DALL-E and diffusion models) that can generate realistic images from text. However, naive learning on synthetic images is inadequate due to the domain gap between real and synthetic images. Thus, we employ a maximum mean discrepancy (MMD) loss to align the synthetic images with the real images, minimizing the domain gap. We evaluate our method on the standard few-shot classification benchmarks CIFAR-FS, FC100, miniImageNet, and tieredImageNet, as well as a cross-domain few-shot classification benchmark, miniImageNet to CUB. The proposed approach significantly outperforms the state-of-the-art in both 5-shot and 1-shot setups on these benchmarks. Our approach is also shown to be effective in the zero-shot classification setup.
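The alignment term is a standard kernel MMD between batches of real and synthetic features. A minimal PyTorch sketch follows, assuming an RBF kernel with a fixed bandwidth; DiffAlign's exact kernel, bandwidth, and loss weighting are assumptions here.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between feature batches
    x and y using an RBF kernel. Bandwidth sigma is an assumed hyperparameter."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Hypothetical usage: total = task_loss + lambda_mmd * mmd_rbf(feat_real, feat_synth)
```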
Consider a scenario in one-shot query-guided object localization where neither an image of the object nor the object category name is available as a query. In such a scenario, a hand-drawn sketch of the object could serve as the query. However, crude hand-drawn sketches alone, when used as queries, can be ambiguous for object localization; e.g., a sketch of a laptop could be mistaken for a sofa. On the other hand, a linguistic definition of the category, e.g., "a small portable computer small enough to use in your lap," along with the sketch query, gives better visual and semantic cues for object localization. In this work, we present a multimodal query-guided object localization approach under the challenging open-set setting. In particular, we use queries from two modalities, namely a hand-drawn sketch and a description of the object (also known as a gloss), to perform object localization. Multimodal query-guided object localization is a challenging task, especially when a large domain gap exists between the queries and the natural images, and because of the difficulty of combining the complementary yet minimal information present across the queries. For example, crude hand-drawn sketches contain abstract shape information about an object, while text descriptions often capture only partial semantic information about a given object category. To address the aforementioned challenges, we present a novel cross-modal attention scheme that guides the region proposal network to generate object proposals relevant to the input queries, and a novel orthogonal projection-based proposal scoring technique that scores each proposal with respect to the queries, thereby yielding the final localization results. ...
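One way to read the orthogonal-projection scoring is: decompose each proposal feature into a component along a fused query vector and an orthogonal residual, and favor proposals dominated by the query-aligned component. The sketch below follows that reading; it is a hedged interpretation, not the paper's exact formulation, and all names are hypothetical.

```python
import torch

def projection_score(proposals, query):
    """Score proposals (N, d) by the ratio of each feature's component along
    the unit query direction to its orthogonal residual. Illustrative only."""
    q = query / query.norm()                      # unit query vector, shape (d,)
    parallel = proposals @ q                      # query-aligned component, (N,)
    residual = proposals - parallel[:, None] * q  # orthogonal residual, (N, d)
    return parallel / (residual.norm(dim=1) + 1e-8)
```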
This work introduces the novel task of Source-free Multi-target Domain Adaptation and proposes an adaptation framework comprising \textbf{Co}nsistency with \textbf{N}uclear-Norm Maximization and \textbf{Mix}Up knowledge distillation (\textit{CoNMix}) as a solution to this problem. The main motive of this work is to solve Single and Multi-target Domain Adaptation (SMTDA) in the source-free paradigm, which enforces the constraint that labeled source data are not available during target adaptation due to various privacy-related restrictions on data sharing. The source-free approach leverages target pseudo-labels, which can be noisy, to improve target adaptation. We introduce consistency between label-preserving augmentations and utilize pseudo-label refinement methods to reduce noisy pseudo-labels. Further, we propose a novel MixUp Knowledge Distillation (MKD) for better generalization over multiple target domains using several source-free STDA models. We also show that a Vision Transformer (VT) backbone gives better feature representations with improved domain transferability and class discriminability. Our proposed framework achieves state-of-the-art (SOTA) results in various paradigms of source-free STDA and MTDA settings on popular domain adaptation datasets such as Office-Home, Office-Caltech, and DomainNet. Project Page: https://sites.google.com/view/conmix-vcl
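A minimal sketch of the MixUp knowledge-distillation idea: mix two target images and train the student to match the correspondingly mixed, teacher-ensembled predictions. The hyperparameters (`alpha`, temperature `T`) and the exact loss weighting are illustrative, not CoNMix's published recipe.

```python
import torch
import torch.nn.functional as F

def mixup_kd_loss(student, teachers, x1, x2, alpha=0.75, T=2.0):
    """Sketch of MixUp KD: student on mixed input vs. mixed ensemble targets.
    'teachers' is a list of frozen source-free STDA models (assumed)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x1 + (1 - lam) * x2
    with torch.no_grad():
        t1 = torch.stack([F.softmax(t(x1) / T, dim=1) for t in teachers]).mean(0)
        t2 = torch.stack([F.softmax(t(x2) / T, dim=1) for t in teachers]).mean(0)
        target = lam * t1 + (1 - lam) * t2       # mixed soft targets
    log_s = F.log_softmax(student(x_mix) / T, dim=1)
    return F.kl_div(log_s, target, reduction="batchmean") * T * T
```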
This paper presents a framework for jointly grounding objects that follow certain semantic relationship constraints given in a scene graph. A typical natural scene contains several objects, often exhibiting visual relationships of varied complexities between them. These inter-object relationships provide strong contextual cues for improving grounding performance compared to a traditional object query-only-based localization task. A scene graph is an efficient and structured way to represent all the objects and their semantic relationships in an image. In an attempt to bridge these two modalities for representing scenes and to utilize contextual information for improving object localization, we rigorously study the problem of grounding scene graphs on natural images. To this end, we propose a novel graph neural network-based approach referred to as Visio-Lingual Message PAssing Graph Neural Network (VL-MPAG Net). In VL-MPAG Net, we first construct a directed graph with object proposals as nodes and an edge between a pair of nodes representing a plausible relation between them. Then a three-step inter-graph and intra-graph message passing is performed to learn context-dependent representations of the proposals and query objects. These object representations are used to score the proposals and produce the final object localizations. The proposed method significantly outperforms the baselines on four public datasets.
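A generic single step of the intra-graph message passing over the directed proposal graph might look like the sketch below; VL-MPAG Net's actual update (and its inter-graph step between the scene graph and the proposal graph) is richer, so treat this only as the structural skeleton, with hypothetical parameter names.

```python
import torch

def message_passing_step(node_feats, adj, W_msg, W_upd):
    """One generic message-passing step: node_feats is (N, d), adj is the (N, N)
    directed adjacency with adj[i, j] = 1 when j sends a message to i, and
    W_msg / W_upd are learnable (d, d) weight matrices (assumed shapes)."""
    messages = adj @ (node_feats @ W_msg)          # sum messages from in-neighbors
    return torch.relu(node_feats @ W_upd + messages)  # updated node states
```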
Although machine learning methods perform well within their training domain, they often fail when deployed in the real world. In cardiovascular magnetic resonance imaging (CMR), respiratory motion represents a major challenge for acquisition quality and hence for subsequent analysis and final diagnosis. We present a workflow that predicts a severity score for respiratory motion in CMR, developed for the CMRxMotion challenge 2022. This is an important tool that gives technicians immediate feedback on CMR quality during acquisition, since low-quality images can be reacquired directly while the patient is still present. Our approach thereby ensures that acquired CMR images meet a defined quality standard before they are used for further diagnosis, and it can serve as an effective basis for reliable diagnosis even in the presence of severe motion artifacts. Combined with our segmentation model, this provides a complete pipeline that supports cardiologists and technicians in their daily work by guaranteeing proper quality assessment and trustworthy segmentation of cardiovascular scans. The code base is available at https://github.com/meclabtuda/qa_med_data/tree/dev_qa_cmrxmotion.
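The acquisition-time feedback loop described above reduces to a simple quality gate on the predicted severity score; a schematic sketch follows, with a threshold value that is purely illustrative and not taken from the paper.

```python
def quality_gate(severity_score, threshold=2):
    """Sketch of the feedback loop: flag a scan for immediate reacquisition
    (while the patient is still present) when predicted motion severity is
    too high. The threshold is an assumed, illustrative value."""
    return "reacquire" if severity_score > threshold else "accept"
```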
Automated segmentation of ground-glass opacities and consolidations in chest computed tomography (CT) scans can relieve the burden on radiologists during periods of high resource utilization. However, deep learning models are not trusted in clinical routine because they fail silently on out-of-distribution (OOD) data. We propose a lightweight OOD detection method that exploits the Mahalanobis distance in feature space and integrates seamlessly into state-of-the-art segmentation pipelines. This simple approach can even augment pre-trained models with clinically relevant uncertainty quantification. We validate our method on four chest CT distribution shifts and two magnetic resonance imaging applications, namely segmentation of the hippocampus and the prostate. Our results show that the proposed method effectively detects both far- and near-OOD samples across all explored scenarios.
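A minimal NumPy sketch of Mahalanobis-distance OOD scoring in feature space, assuming a single class-agnostic Gaussian fit to in-distribution features; the paper's exact estimator (e.g., where features are pooled from, or any covariance shrinkage) may differ.

```python
import numpy as np

def fit_gaussian(train_feats):
    """Fit a Gaussian to in-distribution features of shape (n_samples, d)."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False)
    prec = np.linalg.pinv(cov)        # pseudo-inverse for numerical safety
    return mu, prec

def mahalanobis_score(feat, mu, prec):
    """Distance of one feature vector to the training Gaussian; larger = more OOD."""
    d = feat - mu
    return float(np.sqrt(d @ prec @ d))
```

Thresholding this score (e.g., at a percentile of in-distribution scores) is what lets a deployed segmentation model flag inputs it should not be trusted on.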
Most continual learning methods are validated in settings where task boundaries are clearly defined and task-identity information is available during training and testing. We explore how such approaches perform in a task-agnostic setting that more closely resembles a dynamic clinical environment with gradually shifting data. We propose ODEx, a holistic solution that combines out-of-distribution detection with continual learning techniques. Validation on two scenarios of hippocampus segmentation shows that our proposed method reliably maintains performance on earlier tasks without losing plasticity.
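Schematically, combining OOD detection with continual learning can look like the routing sketch below: score an incoming sample against each task-specific model and either route it to the best match or flag a new distribution. The function names and thresholding are hypothetical, not ODEx's exact mechanism.

```python
def route_or_expand(x, models, ood_score, tau):
    """Sketch: ood_score(model, x) returns an OOD score (lower = more
    in-distribution for that model); tau is an assumed novelty threshold."""
    scores = [ood_score(m, x) for m in models]
    best = min(range(len(scores)), key=lambda i: scores[i])
    if scores[best] > tau:
        return None, "new distribution detected: trigger adaptation"
    return models[best], f"route to model {best}"
```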
Network-valued time series are a common form of network data today. However, the aggregate behavior of network sequences generated by network-valued stochastic processes remains relatively understudied. Most existing work focuses on simple settings in which the networks are independent (or conditionally independent) across time and all edges are updated synchronously at each time step. In this paper, we study the concentration properties of the aggregated adjacency matrix, and of the corresponding Laplacian matrix, associated with network sequences generated by lazy network-valued stochastic processes, in which edges update asynchronously and each edge follows its own lazy stochastic process, updating independently of the other edges. We demonstrate the usefulness of these concentration results by proving the consistency of standard estimators in community estimation and change-point estimation problems. We also conduct a simulation study to demonstrate the effect of the laziness parameter, which controls the extent of temporal correlation, on the accuracy of community and change-point estimation.
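A small simulation makes the lazy-edge model concrete: each edge independently keeps its previous state with probability 1 - rho and resamples with probability rho, and the aggregated adjacency matrix is the time average of the snapshots. This is a sketch of the model class under assumed Bernoulli edge marginals, not the paper's exact generative setup.

```python
import numpy as np

def simulate_lazy_network(n, T, rho, p, seed=None):
    """Simulate T snapshots of an n-node lazy network process: each edge
    refreshes with probability rho (so 1 - rho is the laziness controlling
    temporal correlation) and resamples as Bernoulli(p) when it refreshes.
    Returns the snapshots and the aggregated (time-averaged) adjacency matrix."""
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.binomial(1, p, (n, n)), 1)   # initial strict upper triangle
    A = upper + upper.T
    snaps = []
    for _ in range(T):
        refresh = np.triu(rng.binomial(1, rho, (n, n)), 1)  # which edges update
        fresh = np.triu(rng.binomial(1, p, (n, n)), 1)      # resampled edge states
        upper = np.where(refresh == 1, fresh, np.triu(A, 1))
        A = upper + upper.T
        snaps.append(A.copy())
    return snaps, np.mean(snaps, axis=0)             # aggregated adjacency matrix
```

Running this for small rho shows snapshots that are strongly correlated in time, which is exactly the regime where the paper's concentration results for the aggregate are non-trivial.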
FrOoDo is an easy-to-use and flexible framework for out-of-distribution detection tasks in digital pathology. It can be used with PyTorch classification and segmentation models, and its modular design makes it easy to extend. The goal is to automate the task of OOD evaluation so that research can focus on the main goals of designing new models, new methods, or evaluating new datasets. The code can be found at https://github.com/meclabtuda/froodo.